[WIP] New persistence model #4527
Closed
Instead of trying to find faulty keys, we stick to the mechanics we already have in place. This way, we get fewer false positives. However, we now enforce a certain Bisq version format, as parsing and business logic rely on it.
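A minimal sketch of what enforcing such a version format could look like. The class and method names are illustrative, not the actual Bisq API; it only assumes a plain "major.minor.patch" scheme:

```java
import java.util.regex.Pattern;

// Hypothetical sketch: reject any version string that does not match the
// "major.minor.patch" format that parsing and business logic rely on.
public class VersionFormat {
    private static final Pattern VERSION = Pattern.compile("\\d+\\.\\d+\\.\\d+");

    public static boolean isValidFormat(String version) {
        return version != null && VERSION.matcher(version).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidFormat("1.3.9")); // true
        System.out.println(isValidFormat("v1.3"));  // false
    }
}
```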
It turns out that scanning resources does not work reliably enough. Thus, an array of Strings denoting historical Bisq versions it is. Another way to put it: the array denotes which data stores exist in the resources.
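A sketch of that idea, with illustrative names (not the actual Bisq classes): a hard-coded list of historical versions, where each entry implies a bundled data-store resource for that release, instead of scanning the resources at runtime:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: derive resource file names from a fixed list of
// historical versions rather than scanning the classpath for resources.
public class HistoricalDataStores {
    // One entry per release that shipped a snapshot data store (example values).
    public static final String[] HISTORICAL_VERSIONS = {"1.3.4", "1.3.5", "1.3.6"};

    public static List<String> resourceFileNames(String storeName) {
        return Arrays.stream(HISTORICAL_VERSIONS)
                .map(v -> storeName + "_" + v)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(resourceFileNames("TradeStatistics2Store"));
    }
}
```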
If we use the diff sync between seed nodes, we create a race condition: when a seed node gets updated to the new system, it does not sync up properly with the other seed nodes, and that would be fatal. So for the time being, when a seed node asks for data, it uses the "old" big request with all object keys. This should not be a problem for now, since seed nodes have enough bandwidth.
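The fallback described above could be sketched as a simple strategy switch; the types and names here are illustrative assumptions, not the actual Bisq request classes:

```java
// Hypothetical sketch: seed nodes fall back to the legacy full request
// (carrying all object keys) to avoid the race condition while seed nodes
// run mixed versions; regular peers use the new diff-based request.
public class RequestStrategy {
    public enum RequestType { FULL_WITH_ALL_KEYS, DIFF_SINCE_VERSION }

    public static RequestType choose(boolean requesterIsSeedNode) {
        // Seed nodes have enough bandwidth, so the big request is acceptable.
        return requesterIsSeedNode ? RequestType.FULL_WITH_ALL_KEYS
                                   : RequestType.DIFF_SINCE_VERSION;
    }

    public static void main(String[] args) {
        System.out.println(choose(true));  // FULL_WITH_ALL_KEYS
        System.out.println(choose(false)); // DIFF_SINCE_VERSION
    }
}
```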
CI has trouble with tests that do file operations, so these tests have been disabled. They have been useful during development and remain useful for testing locally, though.
This test addresses the migration scenario where a user does not upgrade at the first possible occasion (i.e. to the first Bisq version that has the new database structure in place) but does so later.
Do not clone data anymore, as it is no longer needed.
Remove unneeded change checks; remove unused method.
We no longer call persist at each handleComplete of the task runner (each task completion) but at TaskRunner completion instead. Tasks should still handle their own persisting on important data changes. Since we persist at shutdown anyway, there is not much risk even if some persist calls are missing.
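A minimal sketch of that change, assuming a simplified task runner (illustrative names, not the actual Bisq TaskRunner): persistence runs once after all tasks complete instead of after each task:

```java
import java.util.List;

// Hypothetical sketch: persist once at TaskRunner completion instead of
// after every task. Individual tasks may still persist critical changes
// themselves if needed.
public class TaskRunnerSketch {
    public interface Task { void run(); }

    private final List<Task> tasks;
    private final Runnable persist;

    public TaskRunnerSketch(List<Task> tasks, Runnable persist) {
        this.tasks = tasks;
        this.persist = persist;
    }

    public void runAll() {
        for (Task task : tasks) {
            task.run();    // no per-task persist call anymore
        }
        persist.run();     // single persist at completion
    }
}
```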
…adeManager instead. We have one case where we set the state from inside the trade, via setConfirmedState. As the confidence listener also lives inside the trade, this cannot easily be changed. But when the confidence of the deposit tx changes, an outside task gets triggered anyway, and at restart we apply the confidence again, so persistence is not really needed here. As we persist at shutdown as well, it is covered there too. The confidence handling inside the trade should be refactored at some point so that Trade does not get so overloaded with behaviour code; it should be much more of a value object...
…eHandler
Rename methods and fields.
Return early in method.
…ead call readStore directly.
# Conflicts:
#	common/src/main/java/bisq/common/proto/persistable/UserThreadMappedPersistableEnvelope.java
#	common/src/main/java/bisq/common/storage/FileManager.java
#	p2p/src/main/java/bisq/network/p2p/peers/PeerManager.java
…persistence-model-merged
# Conflicts:
#	common/src/main/java/bisq/common/file/CorruptedStorageFileHandler.java
#	core/src/main/java/bisq/core/account/sign/SignedWitnessStore.java
#	core/src/main/java/bisq/core/account/witness/AccountAgeWitnessStore.java
#	core/src/main/java/bisq/core/dao/governance/blindvote/storage/BlindVoteStore.java
#	core/src/main/java/bisq/core/dao/governance/proposal/storage/appendonly/ProposalStore.java
#	core/src/main/java/bisq/core/trade/statistics/TradeStatistics2StorageService.java
#	core/src/main/java/bisq/core/trade/statistics/TradeStatistics2Store.java
#	p2p/src/main/java/bisq/network/p2p/storage/P2PDataStorage.java
#	p2p/src/main/java/bisq/network/p2p/storage/persistence/AppendOnlyDataStoreService.java
#	p2p/src/main/java/bisq/network/p2p/storage/persistence/PersistableNetworkPayloadStore.java
Merged
Replaced by #4589
While researching the app-freeze problem a user reported, I looked more closely at the changes in our persistence framework. Thread-safety had been added to the PersistableEnvelope classes. While trying to figure out whether those changes might cause a deadlock in some rare edge cases, I came to the conclusion that the overall model does not reflect our requirements well. Worse, it wasted a lot of resources by creating new threads very frequently, most of which got terminated without doing any actual work.
Beyond that, I questioned the need to persist the data that frequently. We read data only at startup (with a few irrelevant exceptions), so writing the data at shutdown would be sufficient for most types of data. The resource-heavy data are the DaoState and the TradeStatistics, as well as the SequenceNumberMap, which gets updated very frequently. All of those have very low priority and can be recovered in case of a severe crash. For more critical data like the disputeList or trades, we use a timer which writes to disk one minute after a persist request was made. We could improve that by calling persistNow exactly at the moments when critical state changes happen (e.g. in a trade). At the moment I left all the write requests as they were before; they basically trigger a request on any data change of an object.
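The delayed-write behaviour described above can be sketched as a simple debounce; the names here are illustrative assumptions, not the actual Bisq persistence classes:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: a persist request only schedules a disk write one
// minute later; repeated requests within that window collapse into a single
// write. persistNow() bypasses the delay for critical state changes.
public class DelayedPersistence {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "persistence-timer");
                t.setDaemon(true);  // do not keep the JVM alive for pending writes
                return t;
            });
    private final Runnable writeToDisk;
    private ScheduledFuture<?> pending;

    public DelayedPersistence(Runnable writeToDisk) {
        this.writeToDisk = writeToDisk;
    }

    public synchronized void requestPersistence() {
        // Only schedule if no write is already pending.
        if (pending == null || pending.isDone()) {
            pending = scheduler.schedule(writeToDisk, 60, TimeUnit.SECONDS);
        }
    }

    public synchronized void persistNow() {
        if (pending != null) {
            pending.cancel(false);  // the immediate write replaces the delayed one
        }
        writeToDisk.run();
    }
}
```

Note that data only covered by the delayed write (or by the write at shutdown) can be lost on a hard crash, which is why this is reserved for low-priority, recoverable data.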
Running those operations on the user thread avoids all the threading issues and the risk of deadlocks. The total performance is certainly better than before, as a lot of wasted effort has been removed. If really needed, we could introduce a threaded version with callback handling as well, but I doubt that is necessary.
The changes are quite heavy and span many areas. Some are pure refactoring, but many are also design changes, such as making data objects no longer responsible for their own persistence; their owner is responsible instead (e.g. TradeManager, not Trade). That makes the overall architecture clearer.
I leave it WIP for now as it needs much more testing, and I might add persistNow calls for the critical state changes and eliminate request calls for irrelevant data changes. There is also a broken test which is commented out at the moment and should be re-enabled. Dealing with unit-test framework magic is not my forte...